A Variational Perspective on Accelerated Methods in Optimization

Authors

  • Andre Wibisono
  • Ashia C. Wilson
  • Michael I. Jordan
Abstract

Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. Although many generalizations and extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what the natural scope of the acceleration concept is. In this paper, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional that we call the Bregman Lagrangian, which generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limit of all of these methods corresponds to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov's technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.
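For orientation, the central object can be stated concisely, following the paper's notation (see the paper for the precise assumptions). Given a convex distance-generating function h with Bregman divergence

    D_h(y, x) = h(y) - h(x) - \langle \nabla h(x), y - x \rangle,

and smooth scaling functions \alpha_t, \beta_t, \gamma_t, the Bregman Lagrangian is

    \mathcal{L}(X, V, t) = e^{\gamma_t + \alpha_t} \left( D_h\left(X + e^{-\alpha_t} V, X\right) - e^{\beta_t} f(X) \right).

Under the "ideal scaling" conditions \dot{\beta}_t \le e^{\alpha_t} and \dot{\gamma}_t = e^{\alpha_t}, the Euler–Lagrange equation reduces to

    \ddot{X}_t + \left(e^{\alpha_t} - \dot{\alpha}_t\right) \dot{X}_t + e^{2\alpha_t + \beta_t} \left[\nabla^2 h\left(X_t + e^{-\alpha_t} \dot{X}_t\right)\right]^{-1} \nabla f(X_t) = 0,

whose solutions satisfy f(X_t) - f(x^*) \le O(e^{-\beta_t}).

As a purely illustrative numerical sketch (not code from the paper): in the Euclidean case h(x) = \frac{1}{2}\|x\|^2 with the polynomial scaling corresponding to p = 2, the dynamics reduce to \ddot{X}_t + (3/t)\dot{X}_t + \nabla f(X_t) = 0, the known continuous-time limit of Nesterov's accelerated gradient descent. Below is a minimal semi-implicit Euler integration of that ODE on a hypothetical quadratic test objective; the matrix A, initial conditions, step size, and horizon are arbitrary choices for illustration.

    import numpy as np

    # Hypothetical test objective: f(x) = 0.5 * x^T A x, with A ill-conditioned.
    A = np.diag([1.0, 100.0])
    grad_f = lambda x: A @ x

    x = np.array([1.0, 1.0])    # X(t0)
    v = np.zeros(2)             # X'(t0)
    t, dt, T = 1.0, 1e-3, 10.0  # start at t0 = 1: the 3/t damping is singular at t = 0

    while t < T:
        # Semi-implicit Euler step for  X'' + (3/t) X' + grad_f(X) = 0
        v += dt * (-(3.0 / t) * v - grad_f(x))
        x += dt * v
        t += dt

    print("f(X(T)) =", 0.5 * x @ A @ x)

Integrating the ODE this way only traces the continuous-time curve; as the abstract emphasizes, obtaining a discrete-time algorithm with a matching accelerated rate is exactly where Nesterov-style discretization enters.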


Similar Articles

Variational and Dynamical Perspectives On Learning and Optimization

Variational and Dynamical Perspectives On Learning and Optimization, by Andre Yohannes Wibisono; Doctor of Philosophy in Computer Science, University of California, Berkeley; Professor Michael Jordan, Chair. The problem of learning from data is prevalent in the modern scientific age, and optimization provides a natural mathematical language for describing learning problems. We study some problems in...


Sequential Optimality Conditions and Variational Inequalities

In recent years, sequential optimality conditions have frequently been used to prove convergence of iterative methods for nonlinear constrained optimization problems. Sequential optimality conditions do not require any constraint qualifications. In this paper, we present the necessary sequential complementary approximate Karush–Kuhn–Tucker (CAKKT) condition for a point to be a solution of a ...


Optimization of Solution Regularized Long-wave Equation by Using Modified Variational Iteration Method

In this paper, a regularized long-wave equation (RLWE) is solved using the Adomian decomposition method (ADM), the modified Adomian decomposition method (MADM), the variational iteration method (VIM), the modified variational iteration method (MVIM), and the homotopy analysis method (HAM). The approximate solution of this equation is calculated in the form of a series whose components are computed by ...


Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms

We consider the problem of optimizing cost functionals on the infinite-dimensional manifold of diffeomorphisms. We present a new class of optimization methods, valid for any optimization problem set up on the space of diffeomorphisms, obtained by generalizing Nesterov accelerated optimization to the manifold of diffeomorphisms. While our framework is general for infinite-dimensional manifolds, we spe...


A Variational Inequality Perspective on Generative Adversarial Nets

Stability has been a recurrent issue in training generative adversarial networks (GANs). One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods specifically designed for this adversarial training. In this work, we review the “variational inequality” framework, which contains most formulation...


Journal:
  • Proceedings of the National Academy of Sciences of the United States of America

Volume: 113, Issue: 47

Pages: -

Publication year: 2016